
    Encoding of Intention and Spatial Location in the Posterior Parietal Cortex

    The posterior parietal cortex is functionally situated between sensory cortex and motor cortex. The responses of cells in this area are difficult to classify as strictly sensory or motor, since many have both sensory- and movement-related activities, as well as activities related to higher cognitive functions such as attention and intention. In this review we will provide evidence that the posterior parietal cortex is an interface between sensory and motor structures and performs various functions important for sensory-motor integration. The review will focus on two specific sensory-motor tasks: the formation of motor plans and the abstract representation of space. Cells in the lateral intraparietal area, a subdivision of the parietal cortex, have activity related to eye movements the animal intends to make. This finding represents the lowest stage in the sensory-motor cortical pathway in which activity related to intention has been found and may represent the cortical stage in which sensory signals go "over the hump" to become intentions and plans to make movements. The second part of the review will discuss the representation of space in the posterior parietal cortex. Encoding spatial locations is an essential step in sensory-motor transformations. Since movements are made to locations in space, these locations should be coded invariant of eye and head position or the sensory modality signaling the target for a movement. Data will be reviewed demonstrating that there exists in the posterior parietal cortex an abstract representation of space that is constructed from the integration of visual, auditory, vestibular, eye position, and proprioceptive head position signals. This representation is in the form of a population code, and the above signals are not combined in a haphazard fashion. Rather, they are brought together using a specific operation to form "planar gain fields" that are the common foundation of the population code for the neural construct of space.
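
    The "planar gain field" operation can be made concrete with a small model: a neuron's retinotopic visual response is multiplicatively scaled by a planar (linear) function of eye position. The sketch below is illustrative only; the Gaussian receptive field, the plane coefficients, and all numeric values are assumptions rather than fitted data.

    ```python
    import numpy as np

    def planar_gain_field(retinal_pos, eye_pos, pref=(0.0, 0.0),
                          sigma=10.0, plane=(0.02, 0.01, 1.0)):
        """Model neuron: Gaussian retinotopic receptive field scaled
        multiplicatively by a planar function of eye position.
        All parameter values are illustrative assumptions."""
        rx, ry = retinal_pos
        ex, ey = eye_pos
        # Gaussian tuning to the target's position on the retina (degrees)
        visual = np.exp(-((rx - pref[0])**2 + (ry - pref[1])**2) / (2 * sigma**2))
        # Planar gain: linear in horizontal and vertical eye position
        a, b, c = plane
        gain = max(a * ex + b * ey + c, 0.0)  # rectified so the rate stays non-negative
        return visual * gain

    # Same retinal stimulus at three eye positions: the response amplitude
    # changes while the retinotopic tuning stays fixed, the signature of a
    # gain field.
    for eye in [(-20.0, 0.0), (0.0, 0.0), (20.0, 0.0)]:
        print(eye, round(planar_gain_field((0.0, 0.0), eye), 3))
    ```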

    Visual and eye movement functions of the posterior parietal cortex

    Lesions of the posterior parietal area in humans produce interesting spatial-perceptual and spatial-behavioral deficits. Among the more important deficits observed are loss of spatial memories, problems representing spatial relations in models or drawings, disturbances in the spatial distribution of attention, and the inability to localize visual targets. Posterior parietal lesions in nonhuman primates also produce visual spatial deficits not unlike those found in humans. Mountcastle and his colleagues were the first to explore this area, using single cell recording techniques in behaving monkeys over 13 years ago. Subsequent work by Mountcastle, Lynch and colleagues, Hyvarinen and colleagues, Robinson, Goldberg & Stanton, and Sakata and colleagues during the period of the late 1970s and early 1980s provided an informational and conceptual foundation for exploration of this fascinating area of the brain. Four new directions of research that are presently being explored from this foundation are reviewed in this article. 1. The anatomical and functional organization of the inferior parietal lobule is presently being investigated with neuroanatomical tracing and single cell recording techniques. This area is now known to comprise at least four separate cortical fields. 2. Neural mechanisms for spatial constancy are being explored. In area 7a, information about eye position is found to be integrated with visual inputs to produce representations of visual space that are head-centered (the meaning of a head-centered coordinate system is explained on p. 13). 3. The role of the posterior parietal cortex, and the pathways projecting into this region, in processing information about motion in the visual world is under investigation. Visual areas within the posterior parietal cortex may play a role in extracting higher-level motion information, including the perception of structure-from-motion. 4. A previously unexplored area within the intraparietal sulcus has been found whose cells hold a representation in memory of planned eye movements. Special experimental protocols have shown that these cells code the direction and amplitude of intended movements in motor coordinates and suggest that this area plays a role in motor planning.
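
    Point 2 rests on a simple geometric relation: to a first approximation, a target's head-centered direction is its retinal direction offset by the current eye-in-head position. A minimal sketch of that transform (the function name and values are illustrative assumptions):

    ```python
    def head_centered(retinal_deg, eye_deg):
        """First-approximation coordinate transform: head-centered
        (azimuth, elevation) = retinal location + eye-in-head position,
        both in degrees."""
        return tuple(r + e for r, e in zip(retinal_deg, eye_deg))

    # The same head-centered target (10, 5) produces different retinal
    # locations as the eyes move, but the transform recovers it:
    print(head_centered((10.0, 5.0), (0.0, 0.0)))   # eyes straight ahead
    print(head_centered((-5.0, 5.0), (15.0, 0.0)))  # eyes 15 deg to the right
    ```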

    How we see

    The visual world is imaged on the retinas of our eyes. However, "seeing" is not a result of neural functions within the eyes but rather a result of what the brain does with those images. Our visual perceptions are produced by parts of the cerebral cortex dedicated to vision. Although our visual awareness appears unitary, different parts of the cortex analyze color, shape, motion, and depth information. There are also special mechanisms for visual attention, spatial awareness, and the control of actions under visual guidance. Often lesions from stroke or other neurological diseases will impair one of these subsystems, leading to unusual deficits such as the inability to recognize faces, the loss of awareness of half of visual space, or the inability to see motion or color.

    Intentional maps in posterior parietal cortex

    The posterior parietal cortex (PPC), historically believed to be a sensory structure, is now viewed as an area important for sensory-motor integration. Among its functions is the forming of intentions, that is, high-level cognitive plans for movement. There is a map of intentions within the PPC, with different subregions dedicated to the planning of eye movements, reaching movements, and grasping movements. These areas appear to be specialized for the multisensory integration and coordinate transformations required to convert sensory input to motor output. In several subregions of the PPC, these operations are facilitated by the use of a common distributed space representation that is independent of both sensory input and motor output. Attention and learning effects are also evident in the PPC. However, these effects may be general to cortex and operate in the PPC in the context of sensory-motor transformations.
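
    One common way to model such a shared, input- and output-independent representation is a basis-function layer of gain-field units from which different reference frames can be read out linearly. The sketch below follows that general framework, not a specific model from the review; the unit counts, tuning parameters, and training procedure are all assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    N = 40
    centers = rng.uniform(-40, 40, N)      # retinotopic preferred locations (deg)
    slopes  = rng.uniform(-0.04, 0.04, N)  # per-unit eye-position gain slopes
    offsets = rng.uniform(0.5, 1.5, N)

    def basis_layer(retinal, eye, sigma=10.0):
        """Gain-field basis units: Gaussian retinotopic tuning, each unit
        multiplied by its own planar function of eye position."""
        vis = np.exp(-((retinal - centers) ** 2) / (2 * sigma ** 2))
        gain = np.maximum(slopes * eye + offsets, 0.0)
        return vis * gain

    # Fit a linear readout of head-centered position (retinal + eye).
    X, y = [], []
    for _ in range(2000):
        r, e = rng.uniform(-30, 30), rng.uniform(-20, 20)
        X.append(basis_layer(r, e))
        y.append(r + e)
    w, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)

    # The same basis would support other linear readouts (e.g. eye-centered).
    print(round(float(basis_layer(12.0, -8.0) @ w), 1))  # should land near 4.0
    ```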

    Coding of the Reach Vector in Parietal Area 5d

    Competing models of sensorimotor computation predict different topological constraints in the brain. Some models propose population coding of particular reference frames in anatomically distinct nodes, whereas others require no such dedicated subpopulations and instead predict that regions will simultaneously code in multiple, intermediate reference frames. Current empirical evidence is conflicting, partly due to difficulties involved in identifying underlying reference frames. Here, we independently varied the locations of hand, gaze, and target over many positions while recording from the dorsal aspect of parietal area 5. We find that the target is represented here in a predominantly hand-centered reference frame, contrasting with the relative code seen in dorsal premotor cortex and the mostly gaze-centered reference frame in the parietal reach region. This supports the hypothesis that different nodes of the sensorimotor circuit contain distinct and systematic representations, and this constrains the types of computational model that are neurobiologically relevant.
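
    The logic of identifying a reference frame can be illustrated with a toy analysis: generate rates from a synthetic hand-centered cell, then ask which candidate frame (target minus hand, or target minus gaze) explains more response variance. Everything below, including the tuning function, noise level, and binned variance-explained score, is an assumption for illustration, not the paper's actual method.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic cell: Gaussian tuning to the target in HAND-centered coordinates.
    def firing_rate(target, hand, pref=5.0, sigma=8.0, peak=40.0):
        return peak * np.exp(-((target - hand) - pref) ** 2 / (2 * sigma ** 2))

    n = 300
    targets = rng.uniform(-20, 20, n)
    hands   = rng.uniform(-15, 15, n)
    gazes   = rng.uniform(-15, 15, n)
    rates   = firing_rate(targets, hands) + rng.normal(0, 2.0, n)

    def frame_score(frame_var, rates, bins=15):
        """Variance explained by a tuning curve built in the given frame
        (binned means stand in for a fitted curve, for brevity)."""
        edges = np.linspace(frame_var.min(), frame_var.max(), bins + 1)
        idx = np.clip(np.digitize(frame_var, edges) - 1, 0, bins - 1)
        means = np.array([rates[idx == k].mean() for k in range(bins)])
        resid = rates - means[idx]
        return 1.0 - resid.var() / rates.var()

    print("hand-centered:", round(frame_score(targets - hands, rates), 2))
    print("gaze-centered:", round(frame_score(targets - gazes, rates), 2))
    # The hand-centered frame should explain far more variance.
    ```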

    Brain Control of Movement Execution Onset Using Local Field Potentials in Posterior Parietal Cortex

    The precise control of movement execution onset is essential for safe and autonomous cortical motor prosthetics. A recent study from the parietal reach region (PRR) suggested that the local field potentials (LFPs) in this area might be useful for decoding execution time information because of the striking difference in the LFP spectrum between the plan and execution states (Scherberger et al., 2005). More specifically, the LFP power in the 0–10 Hz band sharply rises while the power in the 20–40 Hz band falls as the state transitions from plan to execution. However, a change of visual stimulus immediately preceded reach onset, raising the possibility that the observed spectral change reflected the visual event instead of the reach onset. Here, we tested this possibility and found that the LFP spectrum change was still time-locked to movement onset in the absence of a visual event in self-paced reaches. Furthermore, we successfully trained the macaque subjects to use the LFP spectrum change as a "go" signal in a closed-loop brain-control task in which the animals only modulated the LFP and did not execute a reach. The execution onset was signaled by the change in the LFP spectrum, while the target position of the cursor was controlled by the spike firing rates recorded from the same site. The results corroborate that the LFP spectrum change in PRR is a robust indicator of movement onset and can be used to control execution onset in a cortical prosthesis.
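
    The plan-to-execution spectral signature described above maps naturally onto a simple band-power detector. The sketch below is a toy version of that idea; the Welch parameters and the ratio threshold are assumptions, not values from the study.

    ```python
    import numpy as np
    from scipy.signal import welch

    def band_power(lfp, fs, lo, hi):
        """Mean Welch spectral power of an LFP snippet in [lo, hi] Hz."""
        f, pxx = welch(lfp, fs=fs, nperseg=min(len(lfp), 256))
        mask = (f >= lo) & (f <= hi)
        return pxx[mask].mean()

    def detect_go(lfp_window, fs=1000.0, ratio_threshold=2.0):
        """Flag 'execution' when 0-10 Hz power rises relative to 20-40 Hz,
        mirroring the reported plan-to-execution spectral change."""
        low  = band_power(lfp_window, fs, 0.0, 10.0)
        beta = band_power(lfp_window, fs, 20.0, 40.0)
        return (low / beta) > ratio_threshold

    # Synthetic check: a beta-dominated "plan" window vs a low-frequency
    # "execution" window.
    fs = 1000.0
    t = np.arange(0, 0.5, 1 / fs)
    plan = np.sin(2 * np.pi * 30 * t) + 0.2 * np.random.randn(t.size)
    exe  = 2.0 * np.sin(2 * np.pi * 4 * t) + 0.2 * np.random.randn(t.size)
    print(detect_go(plan))  # expected: False
    print(detect_go(exe))   # expected: True
    ```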

    Separate representations of target and timing cue locations in the supplementary eye fields

    When different stimuli indicate where and when to make an eye movement, the brain areas involved in oculomotor control must selectively plan an eye movement to the stimulus that encodes the target position and also encode the information available from the timing cue. This could pose a challenge to the oculomotor system since the representation of the timing stimulus location in one brain area might be interpreted by downstream neurons as a competing motor plan. Evidence from diverse sources has suggested that the supplementary eye fields (SEF) play an important role in behavioral timing, so we recorded single-unit activity from SEF to characterize how target and timing cues are encoded in this region. Two monkeys performed a variant of the memory-guided saccade task, in which a timing stimulus was presented at a randomly chosen eccentric location. Many spatially tuned SEF neurons encoded only the location of the target and not the timing stimulus, whereas several other SEF neurons encoded the location of the timing stimulus and not the target. The SEF population therefore encoded the location of each stimulus with largely distinct neuronal subpopulations. For comparison, we recorded a small population of lateral intraparietal (LIP) neurons in the same task. We found that most LIP neurons that encoded the location of the target also encoded the location of the timing stimulus after its presentation, but selectively encoded the intended eye movement plan in advance of saccade initiation. These results suggest that SEF, by conditionally encoding the location of instructional stimuli depending on their meaning, can help identify which movement plan represented in other oculomotor structures, such as LIP, should be selected for the next eye movement.
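
    The population-level claim, largely distinct subpopulations for the two stimuli, suggests a simple per-neuron classification: test each neuron for spatial tuning to the target and to the timing cue separately. In the toy sketch below, the one-way ANOVA and the alpha level are illustrative assumptions, not the paper's criteria.

    ```python
    import numpy as np
    from scipy.stats import f_oneway

    rng = np.random.default_rng(2)

    def classify_neuron(rates_by_target_loc, rates_by_cue_loc, alpha=0.01):
        """Label a neuron as tuned to the target and/or the timing cue if
        its rates differ significantly across that stimulus's locations."""
        p_target = f_oneway(*rates_by_target_loc).pvalue
        p_cue    = f_oneway(*rates_by_cue_loc).pvalue
        return {"target": p_target < alpha, "timing_cue": p_cue < alpha}

    # Synthetic 'target-only' neuron: tuned across four target locations,
    # flat across four timing-cue locations.
    target_trials = [rng.normal(mu, 2.0, 20) for mu in (8, 20, 35, 12)]
    cue_trials    = [rng.normal(18, 2.0, 20) for _ in range(4)]
    print(classify_neuron(target_trials, cue_trials))
    # expected: {'target': True, 'timing_cue': False}
    ```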

    A Relative Position Code for Saccades in Dorsal Premotor Cortex

    Spatial computations underlying the coordination of the hand and eye present formidable geometric challenges. One way for the nervous system to simplify these computations is to directly encode the relative position of the hand and the center of gaze. Neurons in the dorsal premotor cortex (PMd), which is critical for the guidance of arm-reaching movements, encode the relative position of the hand, gaze, and goal of reaching movements. This suggests that PMd can coordinate reaching movements with eye movements. Here, we examine saccade-related signals in PMd to determine whether they also point to a role for PMd in coordinating visual–motor behavior. We first compared the activity of a population of PMd neurons with a population of parietal reach region (PRR) neurons. During center-out reaching and saccade tasks, PMd neurons responded more strongly before saccades than PRR neurons, and PMd contained a larger proportion of exclusively saccade-tuned cells than PRR. During a saccade relative position-coding task, PMd neurons encoded saccade targets in a relative position code that depended on the relative position of gaze, the hand, and the goal of a saccadic eye movement. This relative position code for saccades is similar to the way that PMd neurons encode reach targets. We propose that eye movement and eye position signals in PMd do not drive eye movements, but rather provide spatial information that links the control of eye and arm movements to support coordinated visual–motor behavior.
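
    The defining property of a relative position code is shift invariance: if gaze, hand, and goal all move together, the relative geometry, and hence the response, is unchanged. A minimal sketch of that property (the tuning form and weights are illustrative assumptions):

    ```python
    import numpy as np

    def relative_position_rate(target, hand, gaze, w=(1.0, 0.6)):
        """Toy relative position code: the rate depends only on the pairwise
        differences (target - gaze) and (hand - gaze), so translating all
        three positions together leaves the response unchanged."""
        tg, hg = target - gaze, hand - gaze
        return np.exp(-(w[0] * (tg - 10.0) ** 2 + w[1] * (hg + 5.0) ** 2) / 200.0)

    print(relative_position_rate(10.0, -5.0, 0.0))   # baseline configuration
    print(relative_position_rate(22.0, 7.0, 12.0))   # all shifted +12: same rate
    ```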